
Transformation-Grounded Image Generation Network for Novel 3D View Synthesis


Abstract

We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible in both the input and novel views, and then re-cast the remaining synthesis problem as image completion. Specifically, we predict both a flow that moves pixels from the input to the novel view and a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) the parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual losses reduces common artifacts of novel view synthesis such as distortions and holes, while successfully generating high-frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results than existing methods.
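The core pipeline the abstract describes, flow-based pixel warping followed by visibility-masked composition with hallucinated content, can be sketched as follows. This is a minimal NumPy illustration, not the paper's network: the function names and the nearest-neighbour sampling are our own simplifications (the actual model predicts the flow and visibility map with a CNN and uses differentiable bilinear sampling).

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` with a per-pixel flow field.

    image: (H, W, C) float array, the input view.
    flow:  (H, W, 2) array; flow[y, x] = (dx, dy) offset into the
           input image for the pixel at (y, x) in the novel view.
    Nearest-neighbour sampling keeps the sketch short.
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return image[src_y, src_x]

def compose_novel_view(warped, hallucinated, visibility):
    """Keep warped pixels where the geometry is visible in the input
    view; fall back to hallucinated (image-completed) pixels in
    disoccluded regions.

    visibility: (H, W) map in [0, 1]; 1 = visible in both views.
    """
    v = visibility[..., None]
    return v * warped + (1.0 - v) * hallucinated
```

With an identity (zero) flow and an all-ones visibility map, the composed view reproduces the input; as visibility drops toward zero, the output blends toward the hallucinated completion.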
